Learn how to build an open LLM app using Hermes 2 Pro, a powerful LLM based on Meta's Llama 3 architecture. This tutorial explains how to deploy Hermes 2 Pro locally, create a function that tracks flight status using the FlightAware API, and wire that function into the LLM via function calling.
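To make the flow concrete, here is a minimal sketch of that function-calling loop, not the tutorial's exact code. It assumes the model is served locally behind an OpenAI-compatible endpoint (Ollama's default port is shown); the model name `hermes-2-pro` is illustrative, and the AeroAPI endpoint and `x-apikey` header follow FlightAware's v4 docs, so verify both against the current references.

```python
import json
import requests
from openai import OpenAI

AEROAPI_KEY = "your-flightaware-key"  # assumption: a FlightAware AeroAPI v4 key
AEROAPI_BASE = "https://aeroapi.flightaware.com/aeroapi"

def get_flight_status(ident: str) -> str:
    """Fetch the latest status for a flight ident (e.g. 'UAL123') from AeroAPI."""
    resp = requests.get(
        f"{AEROAPI_BASE}/flights/{ident}",
        headers={"x-apikey": AEROAPI_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    flights = resp.json().get("flights", [])
    return json.dumps(flights[0]) if flights else "{}"

# JSON schema the model sees; it decides when to call the function.
tools = [{
    "type": "function",
    "function": {
        "name": "get_flight_status",
        "description": "Get the current status of a flight by its ident, e.g. 'UAL123'.",
        "parameters": {
            "type": "object",
            "properties": {"ident": {"type": "string"}},
            "required": ["ident"],
        },
    },
}]

# Point the OpenAI-compatible client at the local server (Ollama shown here).
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
messages = [{"role": "user", "content": "Is flight UAL123 on time?"}]

resp = client.chat.completions.create(
    model="hermes-2-pro", messages=messages, tools=tools
)
# The model may answer directly; tool_calls is only set when it wants the function.
for call in resp.choices[0].message.tool_calls or []:
    if call.function.name == "get_flight_status":
        args = json.loads(call.function.arguments)
        messages.append(resp.choices[0].message)
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": get_flight_status(**args)})
        final = client.chat.completions.create(model="hermes-2-pro", messages=messages)
        print(final.choices[0].message.content)
```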
Cerebrum 8x7b is a large language model (LLM) created specifically for reasoning tasks. It is based on the Mixtral 8x7b model. Like its smaller version, Cerebrum 7b, it is fine-tuned on a small custom dataset of native chain-of-thought data and further improved with targeted RLHF (tRLHF), a novel technique for sample-efficient LLM alignment. Unlike many other recent fine-tuning approaches, its training pipeline uses fewer than 5,000 training prompts and even fewer labeled datapoints for tRLHF.
The native chain-of-thought approach means that Cerebrum is trained to devise a tactical plan before tackling problems that require reasoning. For brainstorming, knowledge-intensive, and creative tasks, Cerebrum will typically omit unnecessarily verbose deliberation.
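A minimal sketch of loading the model with Hugging Face Transformers and prompting it on a reasoning task follows. The repo id `AetherResearch/Cerebrum-1.0-8x7b` and the "thinking assistant" prompt framing are assumptions based on recollection of the model card; check the card for the exact recommended template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AetherResearch/Cerebrum-1.0-8x7b"  # assumed Hugging Face repo id; verify
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# A reasoning-style prompt; the model is expected to plan before answering.
prompt = (
    "A chat between a user and a thinking artificial intelligence assistant.\n"
    "User: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?\nAI:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```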
- 14 free Colab notebooks providing hands-on experience in fine-tuning large language models (LLMs).
- The notebooks cover topics ranging from efficient training methods such as LoRA, using the Hugging Face ecosystem, to specialized models such as Llama, Guanaco, and Falcon.
- They also include more advanced notebooks such as PEFT Finetune, Bloom-560m-tagger, and Meta_OPT-6-1b_Model.
An efficient method for fine-tuning LLMs using LoRA and QLoRA, which makes it possible to train them even on consumer hardware; a minimal sketch follows.
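As a rough illustration of why this fits on consumer GPUs, here is a QLoRA setup sketch using the `transformers`, `bitsandbytes`, and `peft` libraries: the frozen base model is loaded in 4-bit NF4 quantization and only small low-rank adapter matrices are trained. The base model id and LoRA hyperparameters are illustrative, not prescribed by the source.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # any causal LM works; this one is illustrative

# QLoRA: load the frozen base model in 4-bit NF4 quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA: attach small trainable low-rank adapters instead of updating full weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; varies by model
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of base parameters
```

The resulting `model` can be passed straight to a standard `transformers` `Trainer`; only the adapter weights receive gradients, which is what keeps memory use within consumer-hardware limits.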